An Expectation-Maximization Algorithm to Compute a Stochastic Factorization From Data
Abstract
When a transition probability matrix is represented as the product of two stochastic matrices, swapping the factors of the multiplication yields another transition matrix that retains some fundamental characteristics of the original. Since the new matrix can be much smaller than its precursor, substituting the former for the latter can lead to significant savings in computational effort. This strategy, dubbed the “stochastic-factorization trick,” can be used to compute the stationary distribution of a Markov chain, to determine the fundamental matrix of an absorbing chain, and to compute a decision policy via dynamic programming or reinforcement learning. In this paper we show that the stochastic-factorization trick can also provide benefits in terms of the number of samples needed to estimate a transition matrix. We introduce a probabilistic interpretation of a stochastic factorization and build on the resulting model to develop an algorithm that computes the factorization directly from data. If the transition matrix can be well approximated by a low-order stochastic factorization, estimating its factors instead of the original matrix significantly reduces the number of parameters to be estimated. Thus, when compared to estimating the transition matrix directly via maximum likelihood, the proposed method is able to compute approximations of roughly the same quality using less data. We illustrate the effectiveness of the proposed algorithm by using it to help a reinforcement learning agent learn to play the game of blackjack.
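To make the two ideas in the abstract concrete, the sketch below is a minimal NumPy illustration, not the authors' implementation: it applies the standard EM updates for a latent-variable model of the form P(j | i) = Σ_z D[i, z] K[z, j] to observed transition counts, producing stochastic factors D and K, and then applies the factor swap that yields the smaller transition matrix K D. The function name, the toy matrices, and the choice of m = 2 latent states are illustrative assumptions.

```python
# Minimal sketch (assumed names and toy data, not the paper's code):
# EM for a stochastic factorization P ~ D K fitted to transition counts,
# followed by the "stochastic-factorization trick" of swapping the factors.
import numpy as np

def em_stochastic_factorization(counts, m, n_iters=200, seed=0):
    """Fit stochastic D (n x m) and K (m x n) so that D @ K approximates
    the transition matrix underlying counts[i, j] = number of observed
    transitions from state i to state j."""
    rng = np.random.default_rng(seed)
    n = counts.shape[0]
    # Random stochastic initialization: every row sums to one.
    D = rng.random((n, m)); D /= D.sum(axis=1, keepdims=True)
    K = rng.random((m, n)); K /= K.sum(axis=1, keepdims=True)
    for _ in range(n_iters):
        # E-step: posterior over the latent index z for each (i, j) pair,
        # q[i, z, j] proportional to D[i, z] * K[z, j].
        q = D[:, :, None] * K[None, :, :]              # shape (n, m, n)
        q /= q.sum(axis=1, keepdims=True) + 1e-12
        # M-step: re-estimate both factors from expected counts.
        expected = counts[:, None, :] * q              # shape (n, m, n)
        D = expected.sum(axis=2)
        D /= D.sum(axis=1, keepdims=True) + 1e-12
        K = expected.sum(axis=0)
        K /= K.sum(axis=1, keepdims=True) + 1e-12
    return D, K

# Toy usage: sample counts from a rank-2 transition matrix and recover it.
rng = np.random.default_rng(1)
D_true = np.array([[1.0, 0.0], [0.2, 0.8], [0.0, 1.0]])
K_true = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
P_true = D_true @ K_true                               # 3 x 3 transition matrix
counts = np.vstack([rng.multinomial(500, row) for row in P_true])

D, K = em_stochastic_factorization(counts, m=2)
print("approximation error:", np.abs(D @ K - P_true).max())

# Stochastic-factorization trick: swapping the factors gives an m x m matrix
# whose rows also sum to one, i.e., a valid (and smaller) transition matrix.
P_small = K @ D
print("rows of K @ D sum to one:", np.allclose(P_small.sum(axis=1), 1.0))
```

The sample-efficiency argument in the abstract shows up directly in the parameter counts of this sketch: the full matrix has n(n-1) free parameters, whereas the two factors together have only n(m-1) + m(n-1), which is much smaller when m is small relative to n.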
Similar Articles
Incremental Stochastic Factorization for Online Reinforcement Learning
A construct that has been receiving attention recently in reinforcement learning is stochastic factorization (SF), a particular case of non-negative matrix factorization (NMF) in which the matrices involved are stochastic. The idea is to use SF to approximate the transition matrices of a Markov decision process (MDP). This is useful for two reasons. First, learning the factors of the SF instead of the...
Multifactor Expectation Maximization for Factor Graphs
Factor graphs allow large probability distributions to be stored efficiently and facilitate fast computation of marginal probabilities, but the difficulty of training them has limited their use. Given a large set of data points, the training process should yield factors for which the observed data has a high likelihood. We present a factor graph learning algorithm which on each iteration merges...
Shift-Invariant Probabilistic Latent Component Analysis
In this paper we present a model which can decompose probability densities or count data into a set of shift-invariant components. We begin by introducing a regular latent variable model and subsequently extend it to deal with shift invariance in order to model more complex inputs. We develop an expectation maximization algorithm for estimating components and present various results on challe...
Expectation Truncation and the Benefits of Preselection In Training Generative Models
We show how a preselection of hidden variables can be used to efficiently train generative models with binary hidden variables. The approach is based on Expectation Maximization (EM) and uses an efficiently computable approximation to the sufficient statistics of a given model. The computational cost to compute the sufficient statistics is strongly reduced by selecting, for each data point, the...
Technical Note: An Expectation-Maximization Algorithm to Estimate the Parameters of the Markov Chain Choice Model
We develop an expectation-maximization algorithm to estimate the parameters of the Markov chain choice model. In this choice model, a customer arrives into the system to purchase a certain product. If this product is available for purchase, then the customer purchases it. Otherwise, the customer transitions between the products according to a transition probability matrix until she reaches an a...